Binary perceptron


Parametric RDT approach to computational gap of symmetric binary perceptron

Stojnic, Mihailo

arXiv.org Machine Learning

We study the potential presence of statistical-computational gaps (SCG) in symmetric binary perceptrons (SBP) via a parametric utilization of \emph{fully lifted random duality theory} (fl-RDT) [96]. A structural change from a decreasingly ordered to an arbitrarily ordered $c$-sequence (a key fl-RDT parametric component) is observed on the second lifting level and associated with the change from the \emph{satisfiability} ($\alpha_c$) to the \emph{algorithmic} ($\alpha_a$) constraint density threshold, thereby suggesting the potential existence of a nonzero computational gap $SCG=\alpha_c-\alpha_a$. The second-level estimate is shown to match the theoretical $\alpha_c$, whereas the $r\rightarrow \infty$ level one is proposed to correspond to $\alpha_a$. For example, for the canonical SBP ($\kappa=1$ margin) we obtain $\alpha_c\approx 1.8159$ on the second and $\alpha_a\approx 1.6021$ (with a converging tendency towards the $\sim 1.59$ range) on the seventh level. Our propositions concur remarkably well with recent literature: (i) in [20] the local entropy replica approach predicts $\alpha_{LE}\approx 1.58$ as the onset of clustering defragmentation (the presumed driving force behind the failure of locally improving algorithms); (ii) in the $\alpha\rightarrow 0$ regime we obtain on the third lifting level $\kappa\approx 1.2385\sqrt{\frac{\alpha_a}{-\log\left(\alpha_a\right)}}$, which qualitatively matches the overlap gap property (OGP) based predictions of [43] and identically matches the local entropy based predictions of [24]; (iii) the $c$-sequence ordering change phenomenology mirrors the one observed in the asymmetric binary perceptron (ABP) in [98] and the negative Hopfield model in [100]; and (iv) as in [98,100], we here design a CLuP based algorithm whose practical performance closely matches the proposed theoretical predictions.
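
As a quick numeric illustration of the quantities quoted above (a minimal Python sketch; the variable names and the sample evaluation point are ours, not the paper's), the gap $SCG=\alpha_c-\alpha_a$ and the small-$\alpha$ scaling can be evaluated directly:

import math

alpha_c = 1.8159   # second-level (satisfiability) estimate quoted in the abstract
alpha_a = 1.6021   # seventh-level (algorithmic) estimate quoted in the abstract
print(f"SCG = alpha_c - alpha_a ~ {alpha_c - alpha_a:.4f}")   # ~ 0.2138

def kappa_small_alpha(a):
    # third-lifting-level small-alpha scaling quoted in the abstract
    return 1.2385 * math.sqrt(a / (-math.log(a)))

print(f"kappa(alpha_a = 0.01) ~ {kappa_small_alpha(0.01):.4f}")   # illustrative evaluation point only

With the quoted second- and seventh-level values this gives $SCG\approx 0.21$.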


Binary perceptron computational gap -- a parametric fl RDT view

Stojnic, Mihailo

arXiv.org Machine Learning

Recent studies suggest that the asymmetric binary perceptron (ABP) likely exhibits the so-called statistical-computational gap, characterized by the appearance of two phase-transitioning constraint density thresholds: \textbf{\emph{(i)}} the \emph{satisfiability threshold} $\alpha_c$, below/above which the ABP succeeds/fails to operate as a storage memory; and \textbf{\emph{(ii)}} the \emph{algorithmic threshold} $\alpha_a$, below/above which one can/cannot efficiently determine the ABP's weights so that it operates as a storage memory. We consider a particular parametric utilization of \emph{fully lifted random duality theory} (fl RDT) [85] and study its potential algorithmic implications for the ABP. A remarkable structural parametric change is uncovered as one progresses through the fl RDT lifting levels. On the first two levels, the so-called $c$-sequence -- a key parametric fl RDT component -- is of the (natural) decreasing type. A change of this phenomenology on higher levels is then connected to the $\alpha_c$ -- $\alpha_a$ threshold change. Namely, on the second level concrete numerical values give the critical constraint density $\alpha=\alpha_c\approx 0.8331$. While progressing through higher levels decreases this estimate, already on the fifth level we observe a satisfactory level of convergence and obtain $\alpha\approx 0.7764$. This allows us to draw two striking parallels: \textbf{\emph{(i)}} the obtained constraint density estimate is in remarkable agreement with the range $\alpha\in (0.77,0.78)$ of clustering defragmentation (believed to be responsible for the failure of locally improving algorithms) [17,88]; and \textbf{\emph{(ii)}} the observed change of the $c$-sequence phenomenology closely matches that of the negative Hopfield model, for which the existence of efficient algorithms that closely approach a similar type of threshold has been demonstrated recently [87].


Fully lifted \emph{blirp} interpolation -- a large deviation view

Stojnic, Mihailo

arXiv.org Machine Learning

In [104] a powerful fully lifted (fl) probabilistic blirp interpolating mechanism was introduced. It arrived as a strong upgrade of the partially lifted concepts from [100, 101] and the basic ones from [49, 84] (see also, e.g., [31, 32, 60, 106] for early considerations, as well as [5, 64, 67, 101, 107] for a brief history, relevance, and development overview). While its range of applicability across a variety of scientific fields is rather wide, applications in random optimization are of prevalent interest to us. These became particularly fruitful over the last two decades (some of the most prominent examples include compressed sensing, machine learning, and neural network statistical studies; see, e.g., [50, 72-75, 86-91, 108]). Characterizing the typical behavior of their various features, ranging from standard optimization metrics (objective values, optimal solutions, relations between optimizing variables) to associated algorithmic ones (accuracy, speed, convergence), became possible in large part due to strong progress made in understanding and developing powerful comparison mechanisms. For example, many of the above performance metrics often exhibit the so-called phase-transition (PT) phenomenon, where they undergo an abrupt change as one moves from one region of system parameters to another.


Rare dense solutions clusters in asymmetric binary perceptrons -- local entropy via fully lifted RDT

Stojnic, Mihailo

arXiv.org Machine Learning

We study the classical asymmetric binary perceptron (ABP) and the associated \emph{local entropy} (LE) as a potential source of its algorithmic hardness. The isolation of \emph{typical} ABP solutions in the SAT phase seemingly suggests a universal algorithmic hardness. Paradoxically, efficient algorithms do exist even for constraint densities $\alpha$ fairly close to, but at a finite distance (the \emph{computational gap}) from, the capacity. In recent years, the existence of rare large dense clusters and the seemingly magical ability of fast algorithms to find them have been posited as the conceptual resolution of this paradox. The monotonicity or breakdown of the LEs associated with such \emph{atypical} clusters is predicted to play a key role in their thinning out or even complete defragmentation. The invention of fully lifted random duality theory (fl RDT) [90,93,94] allows studying the \emph{typical} features of random structures. A large deviation upgrade, sfl LD RDT [96,97], moves things further and enables characterizations of \emph{atypical} features as well. Utilizing the machinery of [96,97], we here develop a generic framework to study the LE as an atypical ABP feature. Already on the second level of lifting we discover that the LE results closely match those obtained through replica methods. For the classical zero-threshold ABP, we obtain that the LE breaks down for $\alpha$ in the $(0.77,0.78)$ interval, which basically matches the $\alpha\sim 0.75-0.77$ range that the currently best ABP solvers can handle, and effectively indicates that the LE's behavior might indeed be among the key reflections of the ABP's presumed computational gap.
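
For concreteness, the local entropy referred to above is typically defined along the following lines (standard notation, not necessarily the paper's exact normalization): for a reference ABP solution $x^*$ and a distance parameter $d$,
\[
\mathcal{S}(d)\;=\;\lim_{n\rightarrow\infty}\frac{1}{n}\,\mathbb{E}\,\log\#\left\{x\in\{-1,1\}^n:\ x\ \mbox{satisfies all}\ \alpha n\ \mbox{constraints},\ d_H(x,x^*)\leq dn\right\},
\]
and the monotonicity or breakdown discussed above refers to the behavior of the map $d\mapsto\mathcal{S}(d)$.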


A note on the capacity of the binary perceptron

Altschuler, Dylan J., Tikhomirov, Konstantin

arXiv.org Artificial Intelligence

Determining the capacity $\alpha_c$ of the Binary Perceptron is a long-standing problem. Krauth and Mezard (1989) conjectured an explicit value of $\alpha_c$, approximately equal to $.833$, and a rigorous lower bound matching this prediction was recently established by Ding and Sun (2019). Regarding the upper bound, Kim and Roche (1998) and Talagrand (1999) independently showed that $\alpha_c < .996$, while Krauth and Mezard outlined an argument which can be used to show that $\alpha_c < .847$. The purpose of this expository note is to record a complete proof of the bound $\alpha_c < .847$. The proof is a conditional first moment method combined with known results on the spherical perceptron.
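
To see where first moment arguments enter (a sketch, in our notation, of the plain unconditional bound rather than the note's conditional refinement): for the zero-threshold binary perceptron with $m=\alpha n$ independent standard Gaussian constraint vectors, any fixed $w\in\{-1,1\}^n$ satisfies a single constraint with probability $1/2$, so the expected number of solutions is
\[
\mathbb{E}\,Z\;=\;\sum_{w\in\{-1,1\}^n}\mathbb{P}\bigl(w\ \mbox{satisfies all}\ m\ \mbox{constraints}\bigr)\;=\;2^{n}\cdot2^{-\alpha n}\;=\;2^{n(1-\alpha)},
\]
which vanishes for $\alpha>1$ and, via Markov's inequality, gives only the crude bound $\alpha_c\le 1$; the $.847$ bound recorded in this note sharpens the argument by conditioning on spherical perceptron information.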


The Copycat Perceptron: Smashing Barriers Through Collective Learning

Catania, Giovanni, Decelle, Aurélien, Seoane, Beatriz

arXiv.org Artificial Intelligence

We characterize the equilibrium properties of a model of $y$ coupled binary perceptrons in the teacher-student scenario, subject to a learning rule, with an explicit ferromagnetic coupling proportional to the Hamming distance between the students' weights. In contrast to recent works, we analyze a more general setting in which thermal noise is present and affects each student's generalization performance. In the nonzero temperature regime, we find that the coupling of replicas produces a bend of the phase diagram towards smaller values of $\alpha$: this suggests that the free energy landscape gets smoother around the solution with perfect generalization (i.e., the teacher's) at a fixed fraction of examples, allowing standard thermal updates such as Simulated Annealing to easily reach the teacher solution and avoid entrapment in metastable states (as happens in the unreplicated case), even in the so-called computationally easy regime. These results provide additional analytic and numerical evidence for the recently conjectured Bayes-optimal property of Replicated Simulated Annealing (RSA) for a sufficient number of replicas. From a learning perspective, these results also suggest that multiple students working together (in this case, reviewing the same data) are able to learn the same rule both significantly faster and with fewer examples, a property that could be exploited in the context of cooperative and federated learning.
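
The replicated setup lends itself to a very short simulation sketch. The following Python toy (our own construction; the problem sizes, the pairwise Hamming-distance coupling, and the annealing schedule are illustrative assumptions rather than the paper's exact protocol) anneals $y$ coupled binary students toward a teacher:

import numpy as np

rng = np.random.default_rng(0)
n, P, y_rep, gamma = 101, 60, 3, 0.5            # dimension, patterns, replicas, coupling strength
teacher = rng.choice([-1, 1], size=n)
X = rng.standard_normal((P, n))
labels = np.sign(X @ teacher)

def errors(w):
    # number of misclassified patterns for one student
    return int(np.sum(np.sign(X @ w) != labels))

def energy(ws):
    # total training error plus a ferromagnetic replica coupling
    # (proportional to the pairwise Hamming distances between students)
    e = float(sum(errors(w) for w in ws))
    for a in range(y_rep):
        for b in range(a + 1, y_rep):
            e += gamma * np.sum(ws[a] != ws[b]) / n
    return e

ws = [rng.choice([-1, 1], size=n) for _ in range(y_rep)]
E = energy(ws)
for T in np.geomspace(2.0, 0.05, 40):           # simple geometric annealing schedule
    for _ in range(5 * n):
        a, i = rng.integers(y_rep), rng.integers(n)
        ws[a][i] *= -1                          # propose a single spin flip
        E_new = energy(ws)
        if E_new <= E or rng.random() < np.exp(-(E_new - E) / T):
            E = E_new                           # accept (Metropolis rule)
        else:
            ws[a][i] *= -1                      # reject: undo the flip
print("training errors per student:", [errors(w) for w in ws])
print("overlap with teacher:", [float(w @ teacher) / n for w in ws])

Increasing gamma pulls the students' weights together, which is the mechanism the abstract associates with a smoother landscape around the teacher solution.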


Binary perceptrons capacity via fully lifted random duality theory

Stojnic, Mihailo

arXiv.org Machine Learning

We study the statistical capacity of the classical binary perceptrons with general thresholds $\kappa$. After recognizing the connection between the capacity and bilinearly indexed (bli) random processes, we utilize recent progress in studying such processes to characterize the capacity. In particular, we rely on \emph{fully lifted} random duality theory (fl RDT) established in \cite{Stojnicflrdt23} to create a general framework for studying the perceptrons' capacities. Successful underlying numerical evaluations are required for the framework (and ultimately the entire fl RDT machinery) to become fully operational in practice. We present results obtained in that direction and uncover that the capacity characterizations are achieved on the second (first non-trivial) level of \emph{stationarized} full lifting. The obtained results \emph{exactly} match the replica symmetry breaking predictions obtained through statistical physics replica methods in \cite{KraMez89}. Most notably, for the famous zero-threshold scenario, $\kappa=0$, we uncover the well-known scaled capacity $\alpha\approx0.8330786$.


High-dimensional manifold of solutions in neural networks: insights from statistical physics

Malatesta, Enrico M.

arXiv.org Artificial Intelligence

In these pedagogic notes I review the statistical mechanics approach to neural networks, focusing on the paradigmatic example of the perceptron architecture with binary and continuous weights, in the classification setting. I review Gardner's approach based on the replica method and the derivation of the SAT/UNSAT transition in the storage setting. I then discuss some recent works that unveiled how the zero training error configurations are geometrically arranged, and how this arrangement changes as the size of the training set increases. I also illustrate how different regions of the solution space can be explored analytically and how the landscape in the vicinity of a solution can be characterized. I give evidence of how, in binary weight models, algorithmic hardness is a consequence of the disappearance of a clustered region of solutions that extends to very large distances. Finally, I demonstrate how the study of linear mode connectivity between solutions can give insights into the average shape of the solution manifold.
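
For reference, the storage-setting object behind the Gardner computation reviewed in these notes can be written (in standard notation, which may differ slightly from the notes') as the number of zero-error binary weight configurations at margin $\kappa$,
\[
Z\;=\;\sum_{\boldsymbol{w}\in\{-1,1\}^N}\ \prod_{\mu=1}^{\alpha N}\theta\!\left(\frac{y^{\mu}\,\boldsymbol{w}\cdot\boldsymbol{\xi}^{\mu}}{\sqrt{N}}-\kappa\right),
\]
with the SAT/UNSAT transition $\alpha_c$ located where the replica-computed quenched entropy $\lim_{N\rightarrow\infty}\frac{1}{N}\,\mathbb{E}\log Z$ drops to zero.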


Equivalence between algorithmic instability and transition to replica symmetry breaking in perceptron learning systems

Zhao, Yang, Qiu, Junbin, Xie, Mingshan, Huang, Haiping

arXiv.org Machine Learning

The binary perceptron is a fundamental model of supervised learning for non-convex optimization, and a root of popular deep learning. The binary perceptron is able to classify random high-dimensional data by computing the marginal probabilities of its binary synapses. The relationship between the algorithmic instability and the equilibrium analysis of the model has remained elusive. Here, we establish this relationship by showing that the instability condition around the algorithmic fixed point is identical to the instability condition for breaking the replica-symmetric saddle-point solution of the free energy function. Therefore, our analysis provides insights towards bridging the gap between non-convex learning dynamics and the statistical mechanics properties of more complex neural networks.
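
As a generic illustration of what an instability condition around an algorithmic fixed point means operationally (a toy Python sketch with a placeholder update map, not the paper's message-passing equations for the synaptic marginals), one can iterate an update to its fixed point and test whether the spectral radius of the Jacobian there exceeds one:

import numpy as np

def F(x, A):
    # toy smooth update map; a stand-in for an algorithmic iteration on marginals
    return np.tanh(A @ x)

def spectral_radius_at_fixed_point(A, iters=2000, eps=1e-6):
    x = np.full(A.shape[0], 0.1)
    for _ in range(iters):                       # damped iteration toward a fixed point of F
        x = 0.5 * x + 0.5 * F(x, A)
    n = len(x)
    J = np.empty((n, n))                         # finite-difference Jacobian of F at the fixed point
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (F(x + e, A) - F(x - e, A)) / (2 * eps)
    return float(np.max(np.abs(np.linalg.eigvals(J))))   # > 1 signals local instability

rng = np.random.default_rng(1)
A = 0.5 * rng.standard_normal((50, 50)) / np.sqrt(50)
print("spectral radius at the fixed point:", spectral_radius_at_fixed_point(A))

The abstract's claim is that, for the binary perceptron's marginal-probability updates, this kind of algorithmic stability criterion coincides with the stability of the replica-symmetric saddle point of the free energy.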


Cumulative Sum Ranking

Milidiú, Ruy Luiz, Rocha, Rafael Henrique Santos

arXiv.org Machine Learning

The goal of ordinal regression is to find a rule that ranks items from a given set. Several learning algorithms for this prediction problem build an ensemble of binary classifiers. Ranking by Projecting uses interdependent binary perceptrons. These perceptrons share the same direction vector but use different bias values. Similar approaches use independent direction vectors and biases. To combine the binary predictions, most of them adopt a simple counting heuristic. Here, we introduce a novel cumulative sum scoring function to combine the binary predictions. The proposed score aggregates the strength of each of the relevant binary classifications with respect to how large the item's rank is. We show that our modeling casts ordinal regression as a Structured Perceptron problem. As a consequence, we simplify its formulation and description, which results in two simple online learning algorithms. The second algorithm is a Passive-Aggressive version of the first. We show that under a rank separability condition both algorithms converge. Furthermore, we provide mistake bounds for each of the two online algorithms. For the Passive-Aggressive version, we assume knowledge of a separation margin, which significantly improves the corresponding mistake bound. Additionally, we show that Ranking by Projecting is a special case of our prediction algorithm. From a neural network architecture point of view, our empirical findings suggest a layer of cusum units for ordinal regression, instead of the usual softmax layer used for multiclass problems.
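
A minimal Python sketch of the shared-direction, multiple-bias setup and of two ways of combining the binary outputs (the function names and the margin-based cumulative-sum variant are our illustrative reading, not necessarily the paper's exact scoring function):

import numpy as np

def binary_scores(w, biases, x):
    # K-1 interdependent binary perceptrons: same direction vector w, different biases
    return w @ x - np.asarray(biases)

def rank_by_counting(w, biases, x):
    # simple counting heuristic: rank = number of "above threshold" votes
    return int(np.sum(binary_scores(w, biases, x) > 0))

def rank_by_cusum(w, biases, x):
    # cumulative-sum style rule: pick the rank whose prefix of margins is largest,
    # i.e., aggregate the strength (not just the sign) of the binary decisions
    prefix = np.concatenate(([0.0], np.cumsum(binary_scores(w, biases, x))))
    return int(np.argmax(prefix))

w = np.array([1.0, -0.5, 2.0])
biases = [0.0, 1.0, 2.5]                         # non-decreasing thresholds for K = 4 ranks
x = np.array([0.8, 0.2, 0.4])
print(rank_by_counting(w, biases, x), rank_by_cusum(w, biases, x))   # both print 2 here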